Self-Supervised Learning by Cross-Modal Audio-Video Clustering
Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full-supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.
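The core idea of the abstract — cluster the frozen features of one modality and use the cluster assignments as classification targets for the network of the other modality — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`kmeans`, `xdc_pseudolabels`) are hypothetical, a plain NumPy k-means stands in for the paper's clustering step, and the actual XDC pipeline alternates this pseudo-labeling with training deep video and audio encoders.

```python
import numpy as np

def kmeans(feats, k, iters=10, seed=0):
    """Plain k-means on a (n, d) feature matrix; returns cluster assignments."""
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen data points.
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each sample to its nearest center.
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned samples.
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return assign

def xdc_pseudolabels(audio_feats, video_feats, k):
    """Cross-modal supervision: audio clusters become the classification
    targets for the video network, and vice versa."""
    video_targets = kmeans(audio_feats, k)  # supervise the video encoder
    audio_targets = kmeans(video_feats, k)  # supervise the audio encoder
    return video_targets, audio_targets
```

In a full training loop, each encoder would then be trained with a cross-entropy loss against the pseudo-labels produced by the other modality, and the clustering would be refreshed periodically as the features improve.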
Review for NeurIPS paper: Self-Supervised Learning by Cross-Modal Audio-Video Clustering
Weaknesses: - Despite the extensive empirical evaluations, the three multimodal variants proposed by the paper are direct extensions of the DeepCluster algorithm [4]. The main contributions appear to be (1) a working pipeline demonstrating that variants of DeepCluster work with video and audio encoders; and (2) scaling up the training to extremely large datasets. While both contributions are interesting, they appear to me to be less relevant to the audience of NeurIPS. It would also be great if such conjectures were accompanied by empirical evaluations on more diverse tasks than the three classification datasets. That would help the audience understand when to apply the XDC variant of DeepCluster (e.g., is it specific to audio and video in videos, or is it more general?).
Review for NeurIPS paper: Self-Supervised Learning by Cross-Modal Audio-Video Clustering
The reviewers generally agree this paper has great execution, a great idea, and great results. The reviewers noted the impact that self-supervised learning on video can have, which has been less explored than the image counterpart. The reviewers also praised the strong empirical results, which will be of high interest to the community.